Evolutionary Hessian Learning: Forced Optimal Covariance Adaptive Learning (FOCAL)
Authors
Abstract
The Covariance Matrix Adaptation Evolution Strategy (CMA-ES) has been the most successful Evolution Strategy at exploiting covariance information; it uses a form of Principal Component Analysis which, under certain conditions, is suggested to converge to the correct covariance matrix, formulated as the inverse of the mathematically well-defined Hessian matrix. In practice, however, there exist conditions under which CMA-ES converges to the global optimum (accomplishing its primary goal) while failing to learn the true covariance matrix (missing an auxiliary objective), likely due to step-size deficiency. These circumstances can involve high-dimensional landscapes (n ≳ 30) with large condition numbers (ξ ≳ 10). This paper introduces a novel technique entitled Forced Optimal Covariance Adaptive Learning (FOCAL), with the explicit goal of determining the Hessian at the global basin of attraction. It begins by laying out theoretical foundations for the inverse relationship between the learned covariance and the Hessian matrices. FOCAL is then introduced and demonstrated to retrieve the Hessian matrix with high fidelity on both model landscapes and experimental Quantum Control systems, which are observed to possess non-separable, non-quadratic search landscapes. The recovered Hessian forms are corroborated by physical knowledge of the systems. This study constitutes an example of natural computing successfully serving other branches of the natural sciences, while at the same time introducing a powerful generic method for any high-dimensional continuous search seeking landscape information.
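The claimed inverse relation between the learned covariance and the Hessian can be illustrated numerically. The following is a minimal sketch (not the paper's FOCAL procedure) of a stripped-down rank-mu covariance update applied to a convex quadratic f(x) = ½ xᵀHx with a known Hessian; the population sizes, learning rate, and step-size decay are illustrative assumptions. On such a landscape the inverse of the learned covariance tends to align, up to a scalar factor, with H, although, as noted above, the match can degrade in higher dimensions and at large condition numbers.

```python
# Minimal sketch: rank-mu covariance adaptation on a known quadratic.
# Not FOCAL and not full CMA-ES; all parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 5
H = np.diag(np.logspace(0, 1, n))          # known Hessian, condition number 10
f = lambda x: 0.5 * x @ H @ x

mean = rng.normal(size=n)
sigma = 1.0
C = np.eye(n)
lam, mu = 20, 10
weights = np.log(mu + 0.5) - np.log(np.arange(1, mu + 1))
weights /= weights.sum()
c_mu = 0.3                                  # covariance learning rate (assumption)

for _ in range(300):
    A = np.linalg.cholesky(C)
    z = rng.normal(size=(lam, n))
    xs = mean + sigma * z @ A.T             # samples ~ N(mean, sigma^2 C)
    order = np.argsort([f(x) for x in xs])
    y_sel = z[order[:mu]] @ A.T             # selected steps, normalized by sigma
    mean = mean + sigma * weights @ y_sel
    # Rank-mu covariance update (evolution path omitted for brevity).
    C = (1 - c_mu) * C + c_mu * (y_sel.T * weights) @ y_sel
    sigma *= 0.97                           # crude step-size decay (assumption)

# Up to a scalar factor, inv(C) should roughly align with H.
C_inv = np.linalg.inv(C)
scale = np.trace(H) / np.trace(C_inv)
print("normalized inverse covariance:\n", np.round(scale * C_inv, 2))
print("true Hessian:\n", np.round(H, 2))
```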
Similar Resources
What Does the Evolution Path Learn in CMA-ES?
The Covariance matrix adaptation evolution strategy (CMA-ES) evolves a multivariate Gaussian distribution for continuous optimization. The evolution path, which accumulates historical search direction in successive generations, plays a crucial role in the adaptation of covariance matrix. In this paper, we investigate what the evolution path approximates in the optimization procedure. We show th...
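For context, a minimal sketch of how the evolution path accumulates successive, step-size-normalized mean shifts; this follows standard CMA-ES conventions rather than anything specific to the cited paper.

```python
# Evolution path accumulation as used in standard CMA-ES (sketch).
import numpy as np

def update_evolution_path(p_c, mean_new, mean_old, sigma, c_c, mu_eff):
    """Exponentially smoothed accumulation of normalized mean shifts."""
    step = (mean_new - mean_old) / sigma
    return (1 - c_c) * p_c + np.sqrt(c_c * (2 - c_c) * mu_eff) * step
```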
Convergence properties and data efficiency of the minimum error entropy criterion in ADALINE training
Recently, we have proposed the minimum error entropy (MEE) criterion as an information theoretic alternative to the widely used mean square error criterion in supervised adaptive system training. For this purpose, we have formulated a nonparametric estimator for Renyi’s entropy that employs Parzen windowing. Mathematical investigation of the proposed entropy estimator revealed interesting insig...
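A hedged sketch of the kind of estimator described here: Renyi's quadratic entropy of the error samples estimated with Gaussian Parzen windows, H₂(e) = −log((1/N²) Σᵢ Σⱼ G_{σ√2}(eᵢ − eⱼ)). The kernel width `sigma` is an illustrative assumption.

```python
# Parzen-window estimator of Renyi's quadratic entropy for scalar errors.
import numpy as np

def renyi_quadratic_entropy(errors, sigma=0.5):
    """H2 = -log(information potential) with Gaussian Parzen windows."""
    e = np.asarray(errors, dtype=float)
    diff = e[:, None] - e[None, :]
    s2 = 2.0 * sigma**2                     # variance of the convolved kernel
    kernels = np.exp(-diff**2 / (2.0 * s2)) / np.sqrt(2.0 * np.pi * s2)
    information_potential = kernels.mean()
    return -np.log(information_potential)
```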
Adaptive exploration through covariance matrix adaptation enables developmental motor learning
The “Policy Improvement with Path Integrals” (PI2) [25] and “Covariance Matrix Adaptation Evolutionary Strategy” [8] are considered to be state-of-the-art in direct reinforcement learning and stochastic optimization respectively. We have recently shown that incorporating covariance matrix adaptation into PI2 – which yields the PICM...
Perfect Tracking of Supercavitating Non-minimum Phase Vehicles Using a New Robust and Adaptive Parameter-optimal Iterative Learning Control
In this manuscript, a new method is proposed to provide perfect tracking of the supercavitation system based on a new two-state model. The aim is to track the pitch rate and angle of attack through the fin and cavitator inputs. The pitch rate of the supercavitating vehicle with respect to the fin angle exhibits non-minimum-phase behavior, which slows the commanded pitch-rate response. Control...
RMSProp and equilibrated adaptive learning rates for non-convex optimization
Parameter-specific adaptive learning rate methods are computationally efficient ways to reduce the ill-conditioning problems encountered when training large deep networks. Following recent work that strongly suggests that most of the critical points encountered when training such networks are saddle points, we find how considering the presence of negative eigenvalues of the Hessian could help u...
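As a point of reference, the plain RMSProp update reads as follows; this sketch uses illustrative hyperparameters and omits the equilibrated preconditioning proposed in the cited paper.

```python
# Plain RMSProp parameter update (sketch with illustrative hyperparameters).
import numpy as np

def rmsprop_step(params, grads, cache, lr=1e-3, decay=0.9, eps=1e-8):
    """Scale each parameter's step by a running RMS of its gradients."""
    cache = decay * cache + (1 - decay) * grads**2
    params = params - lr * grads / (np.sqrt(cache) + eps)
    return params, cache
```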
Journal: CoRR
Volume: abs/1112.4454
Pages: -
Publication year: 2011